Learning Structured Sparsity in Deep Neural Networks

Wen, Wei, Wu, Chunpeng, Wang, Yandan, Chen, Yiran, Li, Hai

Neural Information Processing Systems

High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNNs) in resource-constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a larger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of the DNN to efficiently accelerate its evaluation (experimental results show that SSL achieves on average 5.1x and 3.1x speedups of AlexNet's convolutional-layer computation on CPU and GPU, respectively, with off-the-shelf libraries; these speedups are about twice those obtained with non-structured sparsity); (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of the original 32-layer ResNet.
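The regularizer at the heart of SSL is a group Lasso applied to structured groups of weights (e.g., all weights belonging to one filter), so that entire structures are driven to zero during training. Below is a minimal sketch of how such a penalty can be added to an ordinary training step; it assumes PyTorch, uses one group per output filter, and the model, layer sizes, and penalty weight lambda_g are illustrative placeholders rather than the paper's settings.

import torch
import torch.nn as nn

def filter_group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    """Group-Lasso term for one conv layer: sum over output filters n of ||W[n, :, :, :]||_2."""
    w = conv.weight                        # shape: (out_channels, in_channels, kH, kW)
    return w.flatten(1).norm(dim=1).sum()  # one group per output filter

# Illustrative toy model and data (not the paper's architecture or hyperparameters).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()
lambda_g = 1e-4                            # strength of the structured-sparsity penalty (assumed value)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = criterion(model(x), y)
# Add the group-Lasso penalty over all conv layers to the data loss.
loss = loss + lambda_g * sum(filter_group_lasso(m)
                             for m in model.modules() if isinstance(m, nn.Conv2d))
loss.backward()
optimizer.step()

Groups can analogously be defined over channels, filter shapes, or whole layers to obtain the other forms of structured sparsity described in the abstract.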


Reviews: Learning Structured Sparsity in Deep Neural Networks

Neural Information Processing Systems

Using group sparsity to turn off redundant parts of a CNN and improve its speed seems like a good idea. Indeed, significant speed-ups are obtained in a large variety of experiments, with little loss in accuracy and sometimes even a small improvement. The authors apply group sparsity along several axes, including the number of filters and channels used and the shape of the filters (I did not really understand how the authors efficiently deactivate certain filter sites; this should be clarified). The idea explored in the paper is thus rather straightforward, but it is a good and probably useful one. However, unless I missed something, many details are missing: how is the group-sparsity optimisation performed within the CNN training?

